
    Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis

    The availability of large-scale annotated image datasets and recent advances in supervised deep learning methods enable the end-to-end derivation of representative image features that can impact a variety of image analysis problems. Such supervised approaches, however, are difficult to implement in the medical domain, where large volumes of labelled data are difficult to obtain due to the complexity of manual annotation and inter- and intra-observer variability in label assignment. We propose a new convolutional sparse kernel network (CSKN), a hierarchical unsupervised feature learning framework that addresses the challenge of learning representative visual features in medical image analysis domains where annotated training data are scarce. Our framework makes three contributions: (i) we extend kernel learning to identify and represent invariant features across image sub-patches in an unsupervised manner; (ii) we initialise our kernel learning with a layer-wise pre-training scheme that leverages the sparsity inherent in medical images to extract initial discriminative features; and (iii) we adapt a multi-scale spatial pyramid pooling (SPP) framework to capture subtle geometric differences between learned visual features. We evaluated our framework on medical image retrieval and classification using three public datasets. Our results show that our CSKN had better accuracy than other conventional unsupervised methods and comparable accuracy to methods that used state-of-the-art supervised convolutional neural networks (CNNs). Our findings indicate that our unsupervised CSKN provides an opportunity to leverage unannotated big data in medical imaging repositories.
    Comment: Accepted by Medical Image Analysis (with a new title, 'Convolutional Sparse Kernel Network for Unsupervised Medical Image Analysis'). The manuscript is available at https://doi.org/10.1016/j.media.2019.06.005.
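    The multi-scale spatial pyramid pooling (SPP) step mentioned in the abstract can be illustrated with a short sketch. The code below is an assumption for illustration only (the pyramid levels, the use of max-pooling, and the NumPy implementation are not taken from the paper): a feature map is pooled over grids of increasing resolution and the bin responses are concatenated into a fixed-length descriptor.

```python
# Minimal sketch of multi-scale spatial pyramid pooling (SPP), not the
# authors' implementation: a feature map is max-pooled over grids of
# increasing resolution and the bin responses are concatenated into one
# fixed-length descriptor regardless of the input size.
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """feature_map: (H, W, C) array of learned features; returns a 1-D descriptor."""
    h, w, c = feature_map.shape
    pooled = []
    for n in levels:                      # n x n grid at this pyramid level
        ys = np.linspace(0, h, n + 1, dtype=int)
        xs = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
                pooled.append(cell.max(axis=(0, 1)))   # max-pool each channel
    return np.concatenate(pooled)         # length = C * sum(n^2 for n in levels)

# Example: a 32x32 map with 8 channels -> descriptor of length 8 * (1 + 4 + 16) = 168
descriptor = spatial_pyramid_pool(np.random.rand(32, 32, 8))
print(descriptor.shape)  # (168,)
```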

    Review of Positron Emission Tomography at Royal Prince Alfred Hospital, CHERE Project Report No 18

    This report reviews the clinical uses, impacts on clinical management, clinical outcomes, and resource use of Positron Emission Tomography (PET) at Royal Prince Alfred Hospital (RPAH).

    Improving Skin Lesion Segmentation via Stacked Adversarial Learning

    Segmentation of skin lesions is an essential step in computer-aided diagnosis (CAD) for automated melanoma diagnosis. Recently, segmentation methods based on fully convolutional networks (FCNs) have achieved great success on general images. This success is primarily attributable to FCNs leveraging large labelled datasets to learn features that correspond to the shallow appearance and the deep semantics of the images. Such large labelled datasets, however, are usually not available for medical images, so researchers have used specific cost functions and post-processing algorithms to refine the coarse boundaries of the results and improve FCN performance in skin lesion segmentation. These methods rely heavily on tuning many parameters and on post-processing techniques. In this paper, we adopt generative adversarial networks (GANs), given their inherent ability to produce consistent and realistic image features by using deep neural networks and adversarial learning concepts. We build upon the GAN with a novel stacked adversarial learning architecture such that skin lesion features can be learned, iteratively, in a class-specific manner. The outputs from our method are then added to the existing FCN training data, thus increasing the overall feature diversity. We evaluated our method on the ISIC 2017 skin lesion segmentation challenge dataset; we show that it is more accurate and robust than existing state-of-the-art skin lesion segmentation methods.
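    A minimal sketch of the adversarial-learning idea behind this abstract is given below. It is not the paper's stacked architecture; the tiny networks, the 0.1 adversarial weight, and the toy data are illustrative assumptions. It only shows the generic pattern: a discriminator scores (image, mask) pairs, and the segmenter is trained with a pixel-wise loss plus a term that rewards fooling the discriminator.

```python
# Sketch of adversarial learning for segmentation (assumed, not the paper's
# architecture): a discriminator scores (image, mask) pairs, and the
# segmenter learns to make predicted masks indistinguishable from real ones
# in addition to a standard pixel-wise loss.
import torch
import torch.nn as nn

segmenter = nn.Sequential(                     # stand-in for an FCN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid())
discriminator = nn.Sequential(                 # scores (image, mask) pairs
    nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid())

opt_s = torch.optim.Adam(segmenter.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCELoss()

images = torch.rand(4, 3, 64, 64)              # toy batch of dermoscopy images
masks = (torch.rand(4, 1, 64, 64) > 0.5).float()

# --- discriminator step: real (image, mask) pairs vs. predicted pairs ---
pred = segmenter(images).detach()
d_real = discriminator(torch.cat([images, masks], dim=1))
d_fake = discriminator(torch.cat([images, pred], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# --- segmenter step: pixel-wise loss + adversarial term (fool the discriminator) ---
pred = segmenter(images)
d_fake = discriminator(torch.cat([images, pred], dim=1))
loss_s = bce(pred, masks) + 0.1 * bce(d_fake, torch.ones_like(d_fake))
opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```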

    Occlusion and Slice-Based Volume Rendering Augmentation for PET-CT

    Dual-modality positron emission tomography and computed tomography (PET-CT) depicts pathophysiological function with PET in an anatomical context provided by CT. Three-dimensional volume rendering approaches enable visualization of a two-dimensional slice of interest (SOI) from PET combined with direct volume rendering (DVR) from CT. However, because DVR depicts the whole volume, it may occlude a region of interest, such as a tumor in the SOI. Volume clipping can eliminate this occlusion by cutting away parts of the volume, but it requires intensive user involvement in deciding on the appropriate depth to clip. Transfer functions that are currently available can make the regions of interest visible, but this often requires complex parameter tuning and coupled pre-processing of the data to define the regions. Hence, we propose a new visualization algorithm where an SOI from PET is augmented by volumetric contextual information from a DVR of the counterpart CT so that the obtrusiveness of the CT in the SOI is minimized. Our approach automatically calculates an augmentation depth parameter by considering the occlusion information derived from the CT voxels in front of the PET SOI. The depth parameter is then used to generate an opacity weight function that controls the amount of contextual information visible from the DVR. We outline the improvements of our visualization approach over other slice-based approaches and our own previous work. We present a preliminary clinical evaluation of our visualization in a series of PET-CT studies from patients with non-small cell lung cancer.
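    The idea of deriving an augmentation depth from the occluding CT voxels and turning it into an opacity weight can be sketched as follows. This is an assumed formulation for illustration, not the authors' exact computation; the alpha-compositing occlusion measure and the linear weight ramp are stand-ins.

```python
# Sketch (assumed formulation) of deriving an augmentation depth from the CT
# voxels in front of a PET slice of interest (SOI), then building an opacity
# weight that attenuates the CT DVR near the SOI so the SOI is not obscured.
import numpy as np

ct_opacity = np.random.rand(128, 128, 128)   # toy per-voxel CT opacities in [0, 1]
soi_index = 64                               # SOI position along the viewing axis (z)

# Occlusion in front of the SOI: accumulated opacity along each viewing ray.
front = ct_opacity[:, :, :soi_index]
accumulated = 1.0 - np.prod(1.0 - front, axis=2)     # standard alpha compositing

# Augmentation depth: keep fewer CT slices in front of the SOI when the
# mean accumulated occlusion over the SOI's field of view is high.
max_depth = soi_index
aug_depth = int(max_depth * (1.0 - accumulated.mean()))

# Opacity weight: full CT opacity far from the SOI, fading to zero within
# the augmentation depth.
z = np.arange(soi_index)                     # slice indices in front of the SOI
dist_to_soi = soi_index - z
weight = np.clip(dist_to_soi / max(aug_depth, 1), 0.0, 1.0)
weighted_opacity = front * weight[None, None, :]
```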

    Efficient PET-CT image retrieval using graphs embedded into a vector space

    Combined positron emission tomography and computed tomography (PET-CT) produces functional data (from PET) in relation to anatomical context (from CT), and it has made a major contribution to improved cancer diagnosis, tumour localisation, and staging. The ability to retrieve PET-CT images from large archives has potential applications in diagnosis, education, and research. PET-CT image retrieval requires the consideration of modality-specific 3D image features and spatial contextual relationships between features in both modalities. Graph-based retrieval methods have recently been applied to represent contextual relationships during PET-CT image retrieval. However, accurate methods are computationally complex, often requiring offline processing, and are unable to retrieve images at interactive rates. In this paper, we propose a method for PET-CT image retrieval using a vector space embedding of graph descriptors. Our method defines the vector space in terms of the distance between a graph representing a PET-CT image and a set of fixed-sized prototype graphs; each vector component measures the dissimilarity of the graph and a prototype. Our evaluation shows that our method is significantly faster (≈800× speedup, p < 0.05).
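    The vector space embedding described above can be sketched briefly: each vector component is the dissimilarity between the query graph and one prototype graph, so retrieval can use fast vector distances instead of graph matching. The graph descriptor below (node count, edge count, degree histogram) is a crude stand-in for the paper's PET-CT graph descriptors and is an assumption for illustration.

```python
# Sketch of embedding graphs into a vector space via dissimilarities to a
# fixed set of prototype graphs (the descriptor used here is illustrative,
# not the paper's).
import numpy as np

def graph_signature(adjacency):
    """Crude stand-in for a graph descriptor: node count, edge count,
    and a coarse degree histogram of the adjacency matrix."""
    a = np.asarray(adjacency)
    degrees = a.sum(axis=1)
    hist, _ = np.histogram(degrees, bins=4, range=(0, a.shape[0]))
    return np.concatenate([[a.shape[0], a.sum() / 2], hist])

def dissimilarity(g1, g2):
    return np.linalg.norm(graph_signature(g1) - graph_signature(g2))

def embed(graph, prototypes):
    """Vector space embedding: one dissimilarity value per prototype graph."""
    return np.array([dissimilarity(graph, p) for p in prototypes])

# Example: three small prototype graphs and one query graph (adjacency matrices)
prototypes = [np.eye(4, k=1) + np.eye(4, k=-1),          # path graph
              np.ones((4, 4)) - np.eye(4),                # complete graph
              np.zeros((4, 4))]                           # empty graph
query = np.eye(4, k=1) + np.eye(4, k=-1)
print(embed(query, prototypes))   # closest (0.0) to the path-graph prototype
```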

    Creating Graph Abstractions for the Interpretation of Combined Functional and Anatomical Medical Images

    The characteristics of the images produced by advanced scanning technologies have led to medical imaging playing a critical role in modern healthcare. The most advanced medical scanners combine different modalities to produce multi-dimensional (3D/4D) complex data that are time-consuming and challenging to interpret. The assimilation of these data is further compounded when multiple such images have to be compared, e.g., when assessing a patient's response to treatment or the results from a clinical search engine. Abstract representations that present the important discriminating characteristics of the data have the potential to prioritise the critical information in images and provide a more intuitive overview of the data, thereby increasing productivity when interpreting multiple complex medical images. Such abstractions act as a preview of the overall information and allow humans to decide when detailed inspection is necessary. Graphs are a natural method for abstracting medical images, as they can represent the relationships between any pathology and the anatomical structures it affects. In this paper, we present a scheme for creating abstract graph visualisations that facilitate an intuitive comparison of the anatomy-pathology relationships within complex medical images. The properties of our abstractions are derived from the characteristics of regions of interest (ROIs) within the images. We demonstrate how our scheme is used to preview, interpret, and compare the location of tumours within volumetric (3D) functional and anatomical images.
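    A minimal sketch of the kind of ROI-derived graph abstraction described above is shown below; the node attributes, the containment edges, and the toy ROIs are assumptions for illustration, not the paper's scheme.

```python
# Sketch (assumed representation) of an abstract graph built from regions of
# interest (ROIs): anatomical structures and tumours become nodes whose
# visual properties (e.g. size) come from ROI characteristics, and edges
# record which anatomical structure each tumour lies in.
rois = [
    {"name": "left lung", "type": "anatomy",   "volume_ml": 2100},
    {"name": "tumour 1",  "type": "pathology", "volume_ml": 14, "inside": "left lung"},
    {"name": "liver",     "type": "anatomy",   "volume_ml": 1500},
]

nodes = [{"id": r["name"],
          "kind": r["type"],
          "size": r["volume_ml"]}          # node size scaled from ROI volume
         for r in rois]
edges = [(r["name"], r["inside"])          # anatomy-pathology containment
         for r in rois if "inside" in r]

print(nodes)
print(edges)   # [('tumour 1', 'left lung')]
```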